-
A new approach for Gröbner bases conversion of polynomial ideals (over a field) of arbitrary dimension is presented. In contrast to the only other approach, based on the Gröbner fan and Gröbner walk for positive-dimensional ideals, the proposed approach is simpler to understand, prove, and implement. It is based on defining, for a given polynomial, a truncated sub-polynomial consisting of all monomials that can possibly become the leading monomial with respect to the target ordering: the monomials between the leading monomial of the target ordering and the leading monomial of the initial ordering. The main ingredient of the new algorithm is the computation of a Gröbner basis with respect to the target ordering for the ideal generated by such truncated parts of the input Gröbner basis. This is done using the extended Buchberger algorithm, which also outputs the relationship between the input and output bases. That information is used in attempts to recover a Gröbner basis of the ideal with respect to the target ordering. In general, more than one iteration may be needed to get a Gröbner basis with respect to the target ordering, since truncated polynomials may miss some leading monomials. The new algorithm has been implemented in Maple and its operation is illustrated using an example. The performance of this implementation is compared with the implementations of other approaches in Maple. In practice, a Gröbner basis with respect to a target ordering can be computed in a single iteration on most examples.
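As a rough illustration of what conversion between monomial orderings looks like at the level of inputs and outputs, the following SymPy sketch computes a Gröbner basis of a small ideal under a degree ordering and under lex. It does not reproduce the truncation-based algorithm described above or its Maple implementation; SymPy simply recomputes the lex basis from scratch, which is exactly the recomputation a conversion algorithm aims to avoid.

```python
# Illustration of Groebner bases under two orderings using SymPy's built-in
# routines (not the truncation-based conversion algorithm of the abstract).
from sympy import groebner, symbols

x, y, z = symbols('x y z')
ideal = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

# Source basis under a degree ordering (grevlex), typically cheap to compute.
G_grevlex = groebner(ideal, x, y, z, order='grevlex')

# Target basis under lex, the ordering usually wanted for elimination.
# A conversion algorithm would derive this from G_grevlex instead of
# recomputing from the original generators, as done here.
G_lex = groebner(ideal, x, y, z, order='lex')

print(G_grevlex)
print(G_lex)
```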
-
In this article, we enrich McCarthy’s theory of extensional arrays with a length and a maxdiff operation. As is well-known, some diff operation (i.e., some kind of difference function showing where two unequal arrays differ) is needed to keep interpolants quantifier free in array theories. Our maxdiff operation returns the max index where two arrays differ; thus, it has a univocally determined semantics. The length function is a natural complement of such a maxdiff operation and is needed to handle real arrays. Obtaining interpolation results for such a rich theory is a surprisingly hard task. We get such results via a thorough semantic analysis of the models of the theory and of their amalgamation and strong amalgamation properties. The results are modular with respect to the index theory; we show how to convert them into concrete interpolation algorithms via a hierarchical approach realizing a polynomial reduction to interpolation in linear arithmetics endowed with free function symbols.
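The intended semantics of maxdiff is concrete enough to state with a toy model: over finite integer-indexed arrays it is simply the largest index at which two arrays disagree. The sketch below only illustrates that semantics, with assumed conventions for equal arrays and for padding; the theory itself works over McCarthy arrays parameterized by an index theory.

```python
# Toy illustration of maxdiff and length over finite integer-indexed arrays
# (Python lists). Conventions for equal arrays and padding are assumptions
# of this sketch, not claims about the theory.
def maxdiff(a, b):
    """Largest index where a and b differ; -1 here if they are equal."""
    n = max(len(a), len(b))
    for i in range(n - 1, -1, -1):
        ai = a[i] if i < len(a) else 0   # pad with a default element
        bi = b[i] if i < len(b) else 0
        if ai != bi:
            return i
    return -1

a = [3, 1, 4, 1, 5]
b = [3, 1, 4, 2, 5]
print(len(a), len(b))   # the length operation on concrete arrays
print(maxdiff(a, b))    # 3: the maximal index where the arrays disagree
print(maxdiff(a, a))    # -1: equal arrays (the theory fixes its own convention)
```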
-
Algorithms for computing congruence closure of ground equations over uninterpreted symbols and interpreted symbols satisfying associativity and commutativity (AC) properties are proposed. The algorithms are based on a framework for computing a congruence closure by abstracting nonflat terms by constants, as proposed first in Kapur's congruence closure algorithm (RTA97). The framework is general, flexible, and has been extended also to develop congruence closure algorithms for the cases when associative-commutative function symbols can have additional properties including idempotency, nilpotency, identities, cancellativity and group properties, as well as their various combinations. The algorithms are modular; their correctness and termination proofs are simple, exploiting modularity. Unlike earlier algorithms, the proposed algorithms neither rely on complex AC-compatible well-founded orderings on nonvariable terms nor need to use the associative-commutative unification and extension rules in completion for generating canonical rewrite systems for congruence closures. They are particularly suited for integrating into Satisfiability modulo Theories (SMT) solvers. A new way to view the Gröbner basis algorithm for polynomial ideals with integer coefficients as a combination of the congruence closure over the AC symbol * with the identity 1 and the congruence closure over an Abelian group with + is outlined.
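For readers unfamiliar with the flattening idea the framework builds on, the following minimal Python sketch shows a plain congruence closure over uninterpreted symbols: nonflat terms are assumed to have already been abstracted by fresh constants, and a union-find is closed under the congruence rule via a signature table. The AC symbols and the additional properties handled by the algorithms in the paper are not covered by this sketch.

```python
# Minimal congruence closure for ground equations over uninterpreted symbols.
# Nonflat terms are assumed abstracted by fresh constants, so every input has
# the form f(c1,...,ck) = d or c = d. AC reasoning is not handled here.

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry

def congruence_closure(flat_defs, equations):
    """flat_defs: list of ((f, (c1,...,ck)), d) meaning f(c1,...,ck) = d.
       equations: list of (c, d) constant equations.
       Returns the union-find closed under congruence."""
    uf = UnionFind()
    for c, d in equations:
        uf.union(c, d)
    changed = True
    while changed:
        changed = False
        sigs = {}  # (f, representatives of args) -> a constant naming that term
        for (f, args), d in flat_defs:
            key = (f, tuple(uf.find(a) for a in args))
            if key in sigs:
                if uf.find(sigs[key]) != uf.find(d):
                    uf.union(sigs[key], d)  # congruence: same signature => equal
                    changed = True
            else:
                sigs[key] = d
    return uf

# f(a) is named c1, f(b) is named c2, and a = b is asserted;
# congruence then forces c1 = c2, i.e., f(a) = f(b).
defs = [(('f', ('a',)), 'c1'), (('f', ('b',)), 'c2')]
uf = congruence_closure(defs, [('a', 'b')])
print(uf.find('c1') == uf.find('c2'))   # True
```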
-
VPNs (Virtual Private Networks) have become an essential privacy-enhancing technology, particularly for at-risk users like dissidents, journalists, NGOs, and others vulnerable to targeted threats. While previous research investigating VPN security has focused on cryptographic strength or traffic leakages, there remains a gap in understanding how lower-level primitives fundamental to VPN operations, like connection tracking, might undermine the security and privacy that VPNs are intended to provide. In this paper, we examine the connection tracking frameworks used in common operating systems, identifying a novel exploit primitive that we refer to as the port shadow. We use the port shadow to build four attacks against VPNs that allow an attacker to intercept and redirect encrypted traffic, de-anonymize a VPN peer, or even portscan a VPN peer behind the VPN server. We build a formal model of modern connection tracking frameworks and identify that the root cause of the port shadow lies in five shared, limited resources. Through bounded model checking, we propose and verify six mitigations in terms of enforcing process isolation. We hope our work leads to more attention on the security aspects of lower-level systems and the implications of integrating them into security-critical applications.
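As a deliberately simplified picture of why shared connection-tracking state is sensitive (a toy model, not the paper's formal model, and one that assumes a first-writer-wins lookup), consider a translation table on a VPN server keyed by the externally visible tuple: peers behind the same public IP draw from one table, so an entry installed by one peer can capture reply traffic intended for another.

```python
# Toy model of a shared connection-tracking table on a VPN/NAT gateway.
# Assumption of this sketch: lookups reuse an existing entry ("first writer
# wins"). Real frameworks and the paper's attack conditions are more involved.
conntrack = {}   # (public_ip, public_port, remote_ip, remote_port) -> peer

def outbound(peer, src_port, remote, public_ip="203.0.113.1"):
    """Install (or reuse) the mapping for a peer's outbound flow."""
    key = (public_ip, src_port, remote[0], remote[1])
    conntrack.setdefault(key, peer)   # an earlier entry shadows this flow
    return key

def inbound(key):
    """Deliver a reply matching this translated tuple to the owning peer."""
    return conntrack.get(key)

k_attacker = outbound("attacker", 40000, ("198.51.100.7", 443))
k_victim   = outbound("victim",   40000, ("198.51.100.7", 443))   # collides
print(inbound(k_victim))   # "attacker": the victim's reply traffic is misdelivered
```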
-
The word problem for a finite set of ground identities is known to be decidable in polynomial time using congruence closure, and this is also the case if some of the function symbols are assumed to be commutative or defined by certain shallow identities, called strongly shallow. We show that decidability in P is preserved if we add the assumption that certain function symbols $f$ are extensional in the sense that $f(s_1,\ldots,s_n) \approx f(t_1,\ldots,t_n)$ implies $s_1 \approx t_1, \ldots, s_n \approx t_n$. In addition, we investigate a variant of extensionality that is more appropriate for commutative function symbols, but which raises the complexity of the word problem to coNP.
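A short way to see the added assumption is to contrast it with the congruence rule that congruence closure always applies: extensionality for a designated symbol $f$ is exactly the converse direction, rendered in LaTeX below.

```latex
% Congruence (always applied by congruence closure) versus extensionality
% (the added assumption for designated symbols f): the converse rule.
\[
  \frac{s_1 \approx t_1 \quad \cdots \quad s_n \approx t_n}
       {f(s_1,\ldots,s_n) \approx f(t_1,\ldots,t_n)}\;(\text{congruence})
  \qquad
  \frac{f(s_1,\ldots,s_n) \approx f(t_1,\ldots,t_n)}
       {s_1 \approx t_1 \quad \cdots \quad s_n \approx t_n}\;(\text{extensionality})
\]
```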
-
The concept of uniform interpolant for a quantifier-free formula from a given formula with a list of symbols, while well-known in the logic literature, has been unknown to the formal methods and automated reasoning community for a long time. This concept is precisely defined. Two algorithms for computing quantifier-free uniform interpolants in the theory of equality over uninterpreted symbols (EUF) endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.
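A standard small example of the kind of output such algorithms produce (assuming the usual cover semantics for uniform interpolants in EUF): eliminating the symbol $e$ below leaves a single Horn clause as the strongest quantifier-free consequence over the remaining symbols, matching the Horn-clause representation computed by the second algorithm.

```latex
% A small illustrative uniform interpolant (cover) in EUF.
\[
  \exists e\,\bigl( f(e, y_1) \approx y_2 \;\wedge\; f(e, y_3) \approx y_4 \bigr)
  \quad\text{has uniform interpolant}\quad
  y_1 \approx y_3 \;\rightarrow\; y_2 \approx y_4 .
\]
```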
-
Algorithms for computing congruence closure of ground equations over uninterpreted symbols and interpreted symbols satisfying associativity and commutativity (AC) properties are proposed. The algorithms are based on a framework for computing the congruence closure by abstracting nonflat terms by constants as proposed first in Kapur’s congruence closure algorithm (RTA97). The framework is general, flexible, and has been extended also to develop congruence closure algorithms for the cases when associative-commutative function symbols can have additional properties including idempotency, nilpotency and/or have identities, as well as their various combinations. The algorithms are modular; their correctness and termination proofs are simple, exploiting modularity. Unlike earlier algorithms, the proposed algorithms neither rely on complex AC compatible well-founded orderings on nonvariable terms nor need to use the associative-commutative unification and extension rules in completion for generating canonical rewrite systems for congruence closures. They are particularly suited for integrating into Satisfiability modulo Theories (SMT) solvers.
-
Several approaches toward quantifier-free interpolation algorithms for theories involving arrays have been proposed by extending the language using a binary function skolemizing the extensionality principle. In FoSSaCS 2021, the last three authors studied the enrichment of McCarthy’s theory of extensional arrays with a maxdiff operation. This paper discusses the implementation of the interpolation algorithm proposed in FoSSaCS 2021 using the Z3 API. The implementation allows the user to choose iZ3, Mathsat, or SMTInterpol as the interpolation engine. The tool returns a formula in SMTLIB2 format, which allows compatibility with model checkers and invariant generators using such a format. We compare our algorithm with state-of-the-art interpolation engines. Our experiments using unsatisfiable formulæ extracted with the model checker UAutomizer show the feasibility of our tool. For that purpose, we used C programs from the ReachSafety-Arrays and MemSafety-Arrays tracks of SV-COMP.
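The sketch below is not the tool described here; it only uses the plain z3py API to build the kind of extensional-array assertion such pipelines pass around and to print it in SMT-LIB2 form, the exchange format mentioned above.

```python
# A z3py sketch (not the interpolation tool itself): build a small
# extensional-array assertion and print it in SMT-LIB2 form.
from z3 import IntSort, ArraySort, Consts, Int, Store, Select, Solver

A = ArraySort(IntSort(), IntSort())
a, b = Consts('a b', A)
i = Int('i')

s = Solver()
# b is a with a[i] written back unchanged, yet claimed distinct from a:
# unsatisfiable by extensionality, a typical shape for interpolation queries.
s.add(b == Store(a, i, Select(a, i)), a != b)

print(s.to_smt2())   # SMT-LIB2 rendering of the assertions
print(s.check())     # unsat
```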
-
The concept of a uniform interpolant for a quantifier-free formula from a given formula with a list of symbols, while well-known in the logic literature, has been unknown to the formal methods and automated reasoning community. This concept is precisely defined. Two algorithms for computing the uniform interpolant of a quantifier-free formula in EUF endowed with a list of symbols to be eliminated are proposed. The first algorithm is non-deterministic and generates a uniform interpolant expressed as a disjunction of conjunctions of literals, whereas the second algorithm gives a compact representation of a uniform interpolant as a conjunction of Horn clauses. Both algorithms exploit efficient dedicated DAG representations of terms. Correctness and completeness proofs are supplied, using arguments combining rewrite techniques with model theory.